%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2019/09.10.17.27
%2 sid.inpe.br/sibgrapi/2019/09.10.17.27.20
%@doi 10.1109/SIBGRAPI.2019.00041
%T On Modeling Context from Objects with a Long Short-Term Memory for Indoor Scene Recognition
%D 2019
%A Laranjeira, Camila,
%A Lacerda, Anisio,
%A Nascimento, Erickson R.,
%@affiliation Universidade Federal de Minas Gerais
%@affiliation Universidade Federal de Minas Gerais
%@affiliation Universidade Federal de Minas Gerais
%E Oliveira, Luciano Rebouças de,
%E Sarder, Pinaki,
%E Lage, Marcos,
%E Sadlo, Filip,
%B Conference on Graphics, Patterns and Images, 32 (SIBGRAPI)
%C Rio de Janeiro, RJ, Brazil
%8 28-31 Oct. 2019
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K Indoor Scene Recognition, Recurrent Neural Networks.
%X Recognizing indoor scenes is still regarded as an open challenge in the Computer Vision field. Indoor scenes can be well represented by their constituent objects, which can vary in angle and appearance and are often partially occluded. Even though Convolutional Neural Networks are remarkable for image-related problems, the top-performing approaches on indoor scenes are those that model the intricate relationships among objects. Since Recurrent Neural Networks were designed to model structure in sequences, we propose representing an image as a sequence of object-level information used to feed a bidirectional Long Short-Term Memory network trained for scene classification. We adopt a many-to-many training approach, in which each element of the sequence outputs a scene prediction, allowing us to combine these predictions to boost recognition. Our method outperforms RNN-based approaches on MIT67, an entirely indoor dataset, and also improves over the most successful methods when combined in an ensemble of classifiers.
%@language en
%3 PID6127653.pdf
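
For illustration only, a minimal PyTorch-style sketch of the idea summarized in the abstract, not the authors' released code: each image is represented as a sequence of object-level feature vectors fed to a bidirectional LSTM, and every timestep emits a scene prediction that can be averaged at inference (many-to-many). All class names, feature dimensions, and the averaging step are assumptions for this sketch; only the number of MIT67 classes (67) comes from the dataset itself.

```python
import torch
import torch.nn as nn

class ObjectSequenceLSTM(nn.Module):
    """Hypothetical sketch: bidirectional LSTM over a sequence of object-level
    feature vectors, producing a scene prediction at every timestep."""
    def __init__(self, feat_dim=512, hidden_dim=256, num_scenes=67):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_scenes)

    def forward(self, obj_feats):
        # obj_feats: (batch, num_objects, feat_dim), one feature vector per
        # detected object, ordered as a sequence
        outputs, _ = self.lstm(obj_feats)       # (batch, num_objects, 2*hidden_dim)
        logits = self.classifier(outputs)       # per-object scene logits
        return logits                           # (batch, num_objects, num_scenes)

# Usage (illustrative): average per-object predictions into one scene score.
model = ObjectSequenceLSTM()
feats = torch.randn(4, 10, 512)                # 4 images, 10 objects each
scene_scores = model(feats).mean(dim=1)        # (4, 67)
```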

